Better latent spaces for better autoencoders

Authors

Abstract

Autoencoders as tools behind anomaly searches at the LHC have the structural problem that they only work in one direction, extracting jets with higher complexity but not the other way around. To address this, we derive classifiers from the latent space of (variational) autoencoders, specifically in Gaussian mixture and Dirichlet latent spaces. In particular, the Dirichlet setup solves the problem and improves both the performance and the interpretability of the networks.
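
Since the abstract only names the idea, a minimal sketch of what deriving a classifier directly from the latent space can look like is given below. Everything in it is an illustrative assumption rather than the authors' implementation: the layer sizes, the flattened 40x40 jet-image input, and the softmax-Gaussian reparameterisation used here as a common approximation of a Dirichlet latent.

import torch
import torch.nn as nn

class DirichletLatentEncoder(nn.Module):
    # Toy encoder: maps a flattened jet image to an approximately
    # Dirichlet-distributed latent vector via a softmax-Gaussian
    # reparameterisation (illustrative sizes, not the paper's network).
    def __init__(self, n_inputs=1600, n_hidden=100, n_latent=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 2 * n_latent),   # means and log-variances
        )

    def forward(self, x):
        mu, logvar = self.body(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        r = torch.softmax(z, dim=-1)             # approximate Dirichlet sample
        return r, mu, logvar

def latent_classifier_score(encoder, x, component=0):
    # Read the tagging score directly off the latent space: the weight of
    # one latent component. Because it is a probability, cutting on r or
    # on 1 - r tags either jet class, so the tagger is not tied to one
    # direction the way a reconstruction-error autoencoder is.
    r, _, _ = encoder(x)
    return r[:, component]

In a sketch like this the probability assigned to a single latent component is itself the classifier, which is the structural point the abstract makes about working in both directions.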


Similar articles

Challenges for Better thesis supervision

Background: Conducting a thesis is one of students' major academic activities. Thesis quality and the experience acquired depend heavily on supervision. Our study aims to identify the challenges in thesis supervision from both students' and faculty members' points of view. Methods: This study was conducted using individual in-depth interviews and Focus Group Discussi...


Predicting Customer-Expectation-Based Warranty Cost for Smaller-the-Better and Larger-the-Better Performance Characteristics

The quality loss function assumes a fixed target and only accounts for immediate issues within manufacturing facilities whereas warranty loss occurs during customer use. Based on the two independent variables, product performance and consumers’ expectation, a methodology to predict the probability of customer complaint is presented in this paper. The formulation presented will serve as a basic ...
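
For reference, the textbook Taguchi quality loss functions usually meant by these two terms are written below; the constant k and this parametrisation are the standard convention, not necessarily the exact formulation used in the paper:

L_{\mathrm{STB}}(y) = k\,y^{2}, \qquad L_{\mathrm{LTB}}(y) = \frac{k}{y^{2}},

where y is the observed performance characteristic and k is a cost coefficient fixed by the loss incurred at a reference deviation.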


Getting Better Results With Latent Semantic Indexing

The paper presents an overview of some important factors influencing the quality of the results obtained when using Latent Semantic Indexing. The factors are separated into five major groups and analyzed both separately and as a whole. A new class of extended Boolean operations such as OR, AND and NOT (ANDNOT) and their combinations is proposed and evaluated on a corpus of religious ...
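
As a point of reference for how such operators can act on graded relevance scores (for instance LSI cosine similarities), the p-norm extended Boolean model of Salton, Fox and Wu is sketched below; the function names and the choice p = 2 are assumptions for illustration and need not match the operators actually proposed in the paper.

import numpy as np

def ext_or(scores, p=2.0):
    # p-norm extended Boolean OR over per-term scores in [0, 1]:
    # high if any individual score is high.
    s = np.asarray(scores, dtype=float)
    return float(np.mean(s ** p) ** (1.0 / p))

def ext_and(scores, p=2.0):
    # p-norm extended Boolean AND: high only if all scores are high.
    s = np.asarray(scores, dtype=float)
    return float(1.0 - np.mean((1.0 - s) ** p) ** (1.0 / p))

def ext_not(score):
    return 1.0 - score

# Example: combine a document's LSI similarities to two query concepts;
# ANDNOT(a, b) can be composed as ext_and([a, ext_not(b)]).
print(ext_and([0.8, 0.3]), ext_or([0.8, 0.3]))   # roughly 0.49 and 0.60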


Better Informed Training of Latent Syntactic Features

We study unsupervised methods for learning refinements of the nonterminals in a treebank. Following Matsuzaki et al. (2005) and Prescher (2005), we may for example split NP without supervision into NP[0] and NP[1], which behave differently. We first propose to learn a PCFG that adds such features to nonterminals in such a way that they respect patterns of linguistic feature passing: each node’s...
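
A toy example of what such latent refinements look like as weighted grammar rules may help; the two-way split and all probabilities below are invented purely for illustration and are not taken from the paper.

# Treebank rule NP -> DT NN after a two-way latent split: every
# nonterminal carries a hidden feature, and rule probabilities are
# learned (e.g. with EM) conditioned on the annotated left-hand side.
split_rules = {
    ("NP[0]", ("DT[0]", "NN[0]")): 0.30,
    ("NP[0]", ("DT[0]", "NN[1]")): 0.25,
    ("NP[0]", ("DT[1]", "NN[0]")): 0.25,
    ("NP[0]", ("DT[1]", "NN[1]")): 0.20,
    ("NP[1]", ("DT[0]", "NN[0]")): 0.10,
    ("NP[1]", ("DT[0]", "NN[1]")): 0.15,
    ("NP[1]", ("DT[1]", "NN[0]")): 0.35,
    ("NP[1]", ("DT[1]", "NN[1]")): 0.40,
}

# The rules sharing an annotated left-hand side still form a distribution.
for lhs in ("NP[0]", "NP[1]"):
    total = sum(p for (head, _), p in split_rules.items() if head == lhs)
    assert abs(total - 1.0) < 1e-9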



Journal

Journal title: SciPost Physics

Year: 2021

ISSN: 2542-4653

DOI: https://doi.org/10.21468/scipostphys.11.3.061